docs(submit): improve arginfo guidance and agent skill#586
njzjz merged 11 commits into deepmodeling:master from
Conversation
Problem
- submit-related arginfo and skill text were strong on field inventory but weak on operational guidance
- the starter example used a subdirectory layout that is less robust for first-time local runs

Change
- clarify path semantics, `remote_root`, `group_size` vs `para_deg`, and task file transfer behavior in hard-coded arginfo
- refresh submit docs, agent skill guidance, and the minimal example to prefer `work_base='.'` and `task_work_path='.'`

Notes
- validated with `git diff --check`, JSON syntax check, and `python3 -m compileall dpdispatcher`

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the settings.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID:
📒 Files selected for processing (3)
✅ Files skipped from review due to trivial changes (3)
📝 Walkthrough

Updated documentation strings across dpdispatcher modules to clarify argument/help text for submission, machine, context, and scheduler/resource parameters. No runtime logic, validation, control flow, or public signatures were changed.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Codecov Report ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## master #586 +/- ##
==========================================
+ Coverage 48.33% 52.27% +3.94%
==========================================
Files 40 40
Lines 3958 3958
==========================================
+ Hits 1913 2069 +156
+ Misses 2045 1889 -156

☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
🧹 Nitpick comments (2)
doc/submit.md (1)
30-42: Path layout mental model is helpful but could clarify the single-task constraint.

The path formula and practical rules are clear. However, the recommendation at line 42 to use `task_work_path = "."` doesn't mention that this pattern is primarily suitable for single-task submissions or when tasks have unique subdirectories. Users running multiple tasks with the same `task_work_path` will experience file collisions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/submit.md` around lines 30-42: Clarify that using `task_work_path = "."` is intended for single-task submissions or when each task has its own unique subdirectory to avoid collisions; update the guidance around `task_work_path`, `forward_files`, `backward_files`, `forward_common_files`, and `backward_common_files` to explicitly state the risk of file collisions when multiple tasks share the same `task_work_path`, and recommend unique `task_work_path` values or per-task subdirectories (or common files for shared assets) for multi-task submissions.

examples/submit_example.json (1)
1-27: Minimal example configuration is appropriate for a single-task workflow.

The updated example correctly demonstrates the simplest local pattern. However, note that this `task_work_path: "."` pattern is only safe when there's a single task. If users duplicate this task entry without unique `task_work_path` values, file collisions will occur, since all tasks would resolve to the same directory (as shown in the `local_context.py` upload/download logic).

The documentation in `doc/submit.md` and the docstring in `submission.py` do mention using `"."` for the "smallest local example" but don't explicitly warn about the multi-task collision scenario. Consider adding a brief note in the example or accompanying documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/submit_example.json` around lines 1-27: The example uses `task_work_path: "."`, which is safe for a single task but will cause file collisions if duplicated; add a short note in examples/submit_example.json or doc/submit.md (and mirror it in the submission.py docstring) that `task_work_path` must be unique per task when running multiple tasks (or suggest per-task subdirectories), and reference the upload/download behavior in `local_context.py` to explain why collisions occur.
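The collision risk flagged above can be sketched in plain Python, assuming the resolved staging directory follows the `<local_root>/<work_base>/<task_work_path>` formula described in doc/submit.md. This is an illustration of the semantics only, not dpdispatcher's actual implementation:

```python
import os

def staging_dir(local_root, work_base, task_work_path):
    # Path formula from doc/submit.md: <local_root>/<work_base>/<task_work_path>
    return os.path.normpath(os.path.join(local_root, work_base, task_work_path))

# Two tasks that both use task_work_path = "." resolve to the same directory,
# so their forward/backward files would overwrite each other.
task_a = staging_dir("/tmp/run", ".", ".")
task_b = staging_dir("/tmp/run", ".", ".")
assert task_a == task_b == "/tmp/run"  # collision: one shared staging dir

# Unique per-task subdirectories keep the staging areas separate.
task_c = staging_dir("/tmp/run", ".", "task_0")
task_d = staging_dir("/tmp/run", ".", "task_1")
assert task_c != task_d
```

The second pair is why the reviewers suggest per-task subdirectories for multi-task submissions: uniqueness of `task_work_path` is what keeps the upload/download steps from clobbering each other.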
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@doc/submit.md`:
- Around line 53-70: Update the Validation workflow docs to either state that
the uv/uvx CLI tools are required or replace the uv-specific commands with
standard equivalents: change uses of "uv run -m json.tool" to "python -m
json.tool", "uvx --with dpdispatcher dargs check -f
dpdispatcher.entrypoints.submit.submission_args" to "dargs check -f
dpdispatcher.entrypoints.submit.submission_args", and "uvx --from dpdispatcher
dpdisp submit" to "dpdisp submit", and add a brief note mentioning that users
may need to install uv/uvx if they prefer the original commands.
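The `python -m json.tool` syntax check recommended above can also be scripted. The sketch below writes a throwaway config (a hypothetical stand-in for examples/submit_example.json, not its real contents) and runs the same check on it:

```python
import json
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for examples/submit_example.json.
cfg = '{"work_base": ".", "task_list": [{"command": "echo ok", "task_work_path": "."}]}'

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(cfg)
    path = f.name

try:
    # Same check as running `python -m json.tool <file>` by hand:
    # the process exits non-zero if the file is not valid JSON.
    result = subprocess.run(
        [sys.executable, "-m", "json.tool", path],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0

    # In-process equivalent: json.loads raises on syntax errors.
    json.loads(cfg)
finally:
    os.unlink(path)
```

Either form needs only the standard library, which is the point of the suggested change away from uv/uvx-specific commands.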
ℹ️ Review info
⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID: db602867-272e-4e37-b46f-47e92b9b712b

📒 Files selected for processing (6):
- doc/submit.md
- dpdispatcher/entrypoints/submit.py
- dpdispatcher/machine.py
- dpdispatcher/submission.py
- examples/submit_example.json
- skills/dpdisp-submit/SKILL.md
Pull request overview
Updates DPDispatcher submit documentation and arginfo text to be more workflow-oriented, clarifying path semantics, task file transfer behavior, and grouping vs parallelism, while refreshing the agent skill and the example config to use safer minimal local defaults.
Changes:
- Reworded submit-related arginfo/schema docs for tasks, resources, and machine fields to reflect common workflows and pitfalls.
- Added a quickstart + mental model + validation workflow to doc/submit.md.
- Simplified the starter example submission config to the smallest stable local pattern (`work_base="."`, `task_work_path="."`, `para_deg=1`).
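The "smallest stable local pattern" can be written out as a config sketch. The field names below come from the arginfo fields discussed in this PR, but the concrete values and dict layout are illustrative, not copied from examples/submit_example.json:

```python
import json

# Minimal local submission sketch: everything runs in one directory, so
# work_base and task_work_path can both be ".". Safe for a single task only;
# multiple tasks sharing task_work_path = "." would collide.
submission = {
    "work_base": ".",            # staging root, relative to machine.local_root
    "forward_common_files": [],  # shared inputs uploaded once per submission
    "backward_common_files": [], # shared outputs downloaded once
    "task_list": [
        {
            "command": "echo hello",
            "task_work_path": ".",  # relative path within work_base
            "forward_files": [],
            "backward_files": [],
        }
    ],
}

# Round-trip through JSON to confirm the structure serializes as a config file.
text = json.dumps(submission, indent=2)
assert json.loads(text) == submission
```

Dumping `text` to a file would give a starting point in the same spirit as the simplified example, to be checked against the real schema with `dargs check` before use.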
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| skills/dpdisp-submit/SKILL.md | Refreshes agent-facing submit workflow guidance, defaults, and triage steps. |
| examples/submit_example.json | Updates the sample submission to use stable minimal local paths and para_deg. |
| dpdispatcher/submission.py | Improves task/resources arginfo strings (paths, file transfer semantics, group_size vs para_deg, etc.). |
| dpdispatcher/machine.py | Improves machine arginfo strings (local_root/remote_root meaning, cleanup implications). |
| dpdispatcher/entrypoints/submit.py | Improves top-level submission schema docs (paths, common vs per-task file transfers). |
| doc/submit.md | Adds quickstart, mental model, grouping note, and validation workflow for users. |
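As a rough model of the `group_size` vs `para_deg` distinction called out for submission.py: `group_size` decides how many tasks are bundled into one scheduler job, while `para_deg` decides how many of a job's task commands run concurrently. A back-of-the-envelope sketch of that arithmetic (not dpdispatcher code):

```python
import math

def plan(n_tasks, group_size, para_deg):
    # group_size: tasks bundled into each scheduler job
    # para_deg: task commands running concurrently inside one job
    n_jobs = math.ceil(n_tasks / group_size)
    waves_per_full_job = math.ceil(group_size / para_deg)
    return n_jobs, waves_per_full_job

# 8 tasks, grouped 4 per job, 2 commands at a time within each job:
n_jobs, waves = plan(8, group_size=4, para_deg=2)
assert n_jobs == 2  # two scheduler jobs are submitted
assert waves == 2   # each job runs its 4 tasks in 2 concurrent batches
```

Raising `group_size` reduces scheduler queue pressure; raising `para_deg` trades that for more concurrent load on each allocated node.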
- revert submit.md and submit_example.json
- reduce skill changes to a minimal note
- keep parameter guidance on the corresponding arginfo fields
- tighten remote_root wording to match the code paths more closely

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
Force-pushed from 8f1440a to 6ed7976
for more information, see https://pre-commit.ci
Remove the extra local-default sentence from dpdisp-submit so the skill stays minimal and keeps parameter guidance in arginfo.

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
- describe outlog/errlog as filenames used inside task_work_path while the task runs
- clarify that local paths are only the typical synchronized destination
- improve SSH-specific arginfo for remote_profile and SSH connection fields

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
for more information, see https://pre-commit.ci
- improve LocalContext and BohriumContext-specific machine arginfo
- clarify Slurm, SlurmJobArray, SGE, LSF, and JH_UniScheduler resource kwargs
- keep scheduler/context-specific guidance in the corresponding files

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
for more information, see https://pre-commit.ci
Update Bohrium context arginfo text to use the current product name instead of DP Cloud Server, while keeping code-level aliases unchanged.

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 4 comments.
- clarify that task_work_path should be relative within work_base
- describe backward_files in terms of the local staging root
- document how absolute work_base interacts with machine.local_root

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
Clarify in submit arginfo that work_base is expected to be relative to machine.local_root and that absolute paths bypass that combination.

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)
Problem
- `group_size`/`para_deg` and some machine/resource fields are easy to misread from the current wording alone

Change
- arginfo updates for `work_base`, `forward_common_files`, `backward_common_files`, `local_root`, `remote_root`, `task_work_path`, `forward_files`, `backward_files`, `group_size`, `para_deg`, `wait_time`, and related fields

Notes
- `doc/submit.md` and `examples/submit_example.json`
- validated with `git diff --check`, JSON syntax check, and `python3 -m compileall dpdispatcher`

Authored by OpenClaw (model: custom-chat-jinzhezeng-group/gpt-5.4)